In a groundbreaking experiment, Chinese researchers handed control of a satellite to an artificial intelligence (AI), with unexpected results: the AI chose to focus on specific locations, leaving the researchers puzzled about the reasoning behind its choices.
Delving into the AI’s mind
Researchers in China broke mission-planning rules by giving an AI control of their satellite for 24 hours, with no human intervention. The AI directed Qimingxing 1, a small Earth-observation satellite, deciding for itself which areas of interest on Earth to monitor. The experiment offered a rare window into the AI's decision-making and what it considered significant.
The AI-controlled satellite picked out specific locations and examined them closely, though the researchers are still unsure why. One was Patna, a large, ancient city on the Ganges River in India, possibly flagged because of the deadly border clash between China and India in 2020. Another was Osaka, a Japanese port that occasionally hosts US Navy vessels operating in the Pacific.
A bold move in satellite control
This experiment marked the first time an AI had been given complete control of an observation satellite without specific instructions or tasks. AI is typically used for narrower jobs such as image processing and collision avoidance, but the researchers believe it is capable of more. Given the high stakes, however, it will likely take substantial evidence before operators routinely entrust AI with full control.
While the AI had control over the satellite's camera, it could not change the satellite's course or orbit. The researchers hope that AI can reduce wasted capacity across the 260 remote-sensing satellites China currently operates. They suggest AI could be employed in observation and monitoring systems, for example to alert national defense to military activity.
Concerns over AI’s decision-making process
The AI's apparent focus on military activity and recent geopolitical history raises concerns about its decision-making process. Current AI technology doesn't "think" the way humans do: it cannot understand complex human interactions or weigh factors outside its training data. The researchers' own uncertainty about why the AI chose its targets suggests that handing full control to a system that cannot comprehend human context remains a risky proposition at this stage.